A proximal gradient method for control problems with non-smooth and non-convex control cost

Authors

Abstract

We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus on cost functionals that promote sparsity, which includes functionals of $$L^p$$-type for $$p\in [0,1)$$. We prove stationarity properties of weak limit points of the method. These are weaker than those provided by Pontryagin's maximum principle and weaker than L-stationarity.
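
To make the iteration concrete, here is a minimal sketch, assuming a discretized problem of the form min_u f(u) + beta*||u||_0 with a smooth tracking term f, a fixed step size tau, and the choice p = 0; the names f, grad_f, tau and beta are placeholders of my own and this is not the paper's implementation. For this L^0-type penalty the proximal map separates componentwise into hard thresholding.

    # Illustrative sketch only: proximal gradient for min_u f(u) + beta*||u||_0
    # after discretization; grad_f, tau and beta are placeholders.
    import numpy as np

    def prox_l0(v, tau, beta):
        # argmin_u 0.5*(u - v)^2 + tau*beta*|u|_0, solved componentwise:
        # keep v_i when 0.5*v_i**2 > tau*beta, otherwise set u_i = 0.
        return np.where(0.5 * v**2 > tau * beta, v, 0.0)

    def proximal_gradient(u0, grad_f, tau, beta, max_iter=500):
        # u_{k+1} = prox_{tau*g}(u_k - tau*grad_f(u_k)) with g = beta*||.||_0
        u = u0.copy()
        for _ in range(max_iter):
            u = prox_l0(u - tau * grad_f(u), tau, beta)
        return u

For the smooth part, a step size tau < 1/L (with L a Lipschitz constant of grad f) is a common choice; the paper's analysis concerns weak limit points of the function-space version of such an iteration.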


Similar articles

Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization

Non-convex and non-smooth optimization plays an important role in machine learning. The proximal gradient method is one of the most important methods for solving non-convex and non-smooth problems, where a proximal operator needs to be solved exactly at each step. However, in many problems the proximal operator does not have an analytic solution, or an exact solution is expensive to obtain. ...
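
One way to picture the inexactness, as a rough sketch under my own assumptions (the penalty g(u) = lam*||D u||_1 and a crude subgradient inner solver are chosen purely for illustration; this is not the cited paper's algorithm): replace the exact proximal map by a few inner iterations on the prox subproblem.

    # Illustrative sketch only: inexact prox via a few subgradient steps on the
    # strongly convex subproblem min_u g(u) + (1/(2*tau))*||u - v||^2,
    # with g(u) = lam*||D u||_1 as an example of a prox with no closed form.
    import numpy as np

    def inexact_prox(v, tau, lam, D, inner_iters=20):
        u = v.copy()
        for k in range(1, inner_iters + 1):
            subgrad = lam * (D.T @ np.sign(D @ u)) + (u - v) / tau
            u = u - (tau / k) * subgrad   # diminishing inner step; accuracy grows with inner_iters
        return u

    def inexact_proximal_gradient(u0, grad_f, tau, lam, D, outer_iters=100):
        u = u0.copy()
        for _ in range(outer_iters):
            u = inexact_prox(u - tau * grad_f(u), tau, lam, D)
        return u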


A Bundle Method for Solving Convex Non-smooth Minimization Problems

Numerical experience shows that bundle methods are very efficient for solving convex non-smooth optimization problems. In this paper we briefly describe the mathematical background of a bundle method and discuss practical aspects of the numerical implementation. Further, we give detailed documentation of our implementation and report on numerical tests.


Convex Optimal Control Problems with Smooth Hamiltonians

Optimal control problems with convex costs, for which Hamiltonians have Lipschitz continuous gradients, are considered. Examples of such problems, including extensions of the linear-quadratic regulator with hard and possibly state-dependent control constraints, and piecewise linear-quadratic penalties are given. Lipschitz continuous differentiability and strong convexity of the terminal cost ar...


Alternating Proximal Gradient Method for Convex Minimization

In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of the alternating direction method of multipliers. The main computational effort in each iteration of the proposed method is to compute the proximal mappings of the involved convex functions. T...
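
A generic two-block version of such an alternation, sketched under my own assumptions (PALM-style block updates rather than the ADMM-based scheme the abstract describes; all function handles are placeholders), could look as follows.

    # Illustrative sketch only: alternating proximal gradient for
    # min_{x,y} f(x, y) + g1(x) + g2(y); all callables are placeholders.
    import numpy as np

    def alternating_proximal_gradient(x, y, grad_f_x, grad_f_y,
                                      prox_g1, prox_g2, tau_x, tau_y, iters=200):
        for _ in range(iters):
            x = prox_g1(x - tau_x * grad_f_x(x, y), tau_x)   # block-x proximal gradient step
            y = prox_g2(y - tau_y * grad_f_y(x, y), tau_y)   # block-y step, using the new x
        return x, y

    # toy usage: f(x, y) = 0.5*||x + y - b||^2, g1 = ||.||_1, g2 = 0
    b = np.ones(5)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of t*||.||_1
    x, y = alternating_proximal_gradient(
        np.zeros(5), np.zeros(5),
        grad_f_x=lambda x, y: x + y - b, grad_f_y=lambda x, y: x + y - b,
        prox_g1=soft, prox_g2=lambda v, t: v, tau_x=0.5, tau_y=0.5)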


General Proximal Gradient Method: A Case for Non-Euclidean Norms

In this paper, we consider composite convex minimization problems. We advocate the merit of considering Generalized Proximal Gradient Methods (GPM) in which the norm employed is not Euclidean. To that end, we show the tractability of the general proximity operator for a broad class of structure priors by proposing a polynomial-time approach to approximately compute it. We also identify a special c...



Journal

Journal title: Computational Optimization and Applications

Year: 2021

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-021-00308-0